
1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPEW34M/45CPHC5
Repository: sid.inpe.br/sibgrapi/2021/09.05.17.11
Last Update: 2021:09.05.17.11.38 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2021/09.05.17.11.38
Metadata Last Update: 2022:06.14.00.00.26 (UTC) administrator
DOI: 10.1109/SIBGRAPI54419.2021.00016
Citation Key: VieiraOliv:2021:GaEsVi
Title: Gaze estimation via self-attention augmented convolutions
Format: On-line
Year: 2021
Access Date: 2024, May 18
Number of Files: 1
Size: 1497 KiB
2. Context
Author: 1 Vieira, Gabriel Lefundes
2 Oliveira, Luciano
Affiliation: 1 Federal University of Bahia
2 Federal University of Bahia
Editor: Paiva, Afonso
Menotti, David
Baranoski, Gladimir V. G.
Proença, Hugo Pedro
Junior, Antonio Lopes Apolinario
Papa, João Paulo
Pagliosa, Paulo
dos Santos, Thiago Oliveira
e Sá, Asla Medeiros
da Silveira, Thiago Lopes Trugillo
Brazil, Emilio Vital
Ponti, Moacir A.
Fernandes, Leandro A. F.
Avila, Sandra
e-Mail Address: lefundes.gabriel@gmail.com
Conference Name: Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)
Conference Location: Gramado, RS, Brazil (virtual)
Date: 18-22 Oct. 2021
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Full Paper
History (UTC): 2021-09-05 17:11:38 :: lefundes.gabriel@gmail.com -> administrator ::
2022-03-02 00:54:15 :: administrator -> menottid@gmail.com :: 2021
2022-03-02 13:39:55 :: menottid@gmail.com -> administrator :: 2021
2022-06-14 00:00:26 :: administrator -> :: 2021
3. Content and structure
Is the master or a copy?: is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: deep learning
gaze estimation
attention-augmented convolutions
Abstract: Although deep learning methods have recently boosted the accuracy of appearance-based gaze estimation, there is still room for improvement in the network architectures for this particular task. Hence, we propose a novel network architecture grounded on self-attention augmented convolutions to improve the quality of the learned features during the training of a shallower residual network. The rationale is that the self-attention mechanism can help outperform deeper architectures by learning dependencies between distant regions in full-face images. This mechanism can also create better and more spatially aware feature representations derived from the face and eye images before gaze regression. We dubbed our framework ARes-gaze, which explores our Attention-augmented ResNet (ARes-14) as twin convolutional backbones. In our experiments, results showed a decrease of the average angular error by 2.38% compared to state-of-the-art methods on the MPIIFaceGaze data set, while achieving second place on the EyeDiap data set. Notably, our proposed framework was the only one to reach high accuracy on both data sets simultaneously.
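For orientation, the sketch below illustrates the kind of attention-augmented convolution the abstract refers to: part of the output channels are produced by an ordinary convolution and the rest by multi-head self-attention over all spatial positions, which is what lets distant face regions interact. This is a minimal PyTorch sketch, not the authors' ARes-14 implementation; the module name AAConv2d, the default head and channel sizes, and the omission of the relative positional encodings often used with attention-augmented convolutions are simplifying assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AAConv2d(nn.Module):
        """Hypothetical attention-augmented convolution (illustration only)."""
        def __init__(self, in_ch, out_ch, kernel_size=3, dk=40, dv=4, heads=4):
            super().__init__()
            self.heads, self.dk, self.dv = heads, dk, dv
            # Convolutional branch supplies the channels not covered by attention.
            self.conv = nn.Conv2d(in_ch, out_ch - dv * heads, kernel_size,
                                  padding=kernel_size // 2)
            # One 1x1 conv computes queries, keys, and values for all heads.
            self.qkv = nn.Conv2d(in_ch, (2 * dk + dv) * heads, 1)
            self.attn_out = nn.Conv2d(dv * heads, dv * heads, 1)

        def forward(self, x):
            b, _, h, w = x.shape
            q, k, v = torch.split(
                self.qkv(x),
                [self.dk * self.heads, self.dk * self.heads, self.dv * self.heads],
                dim=1)
            # Flatten the spatial grid so every pixel attends to every other pixel.
            q = q.view(b, self.heads, self.dk, h * w)
            k = k.view(b, self.heads, self.dk, h * w)
            v = v.view(b, self.heads, self.dv, h * w)
            logits = torch.einsum('bhdq,bhdk->bhqk', q * self.dk ** -0.5, k)
            weights = F.softmax(logits, dim=-1)
            attn = torch.einsum('bhqk,bhdk->bhdq', weights, v)
            attn = attn.reshape(b, self.heads * self.dv, h, w)
            # Concatenate conv features and attention features channel-wise.
            return torch.cat([self.conv(x), self.attn_out(attn)], dim=1)

With out_ch=64 and the defaults above, the convolutional branch contributes 48 channels and the attention branch 16, so the block can stand in for a standard 64-channel convolution, e.g. inside the residual blocks of a shallow ResNet.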
Arrangement 1: urlib.net > SDLA > Fonds > SIBGRAPI 2021 > Gaze estimation via...
Arrangement 2: urlib.net > SDLA > Fonds > Full Index > Gaze estimation via...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content:
agreement.html 05/09/2021 14:11 1.3 KiB 
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPEW34M/45CPHC5
zipped data URL: http://urlib.net/zip/8JMKD3MGPEW34M/45CPHC5
Language: en
Target File: gaze_attention_sibgrapi_2021_CAMERA_READY(1).pdf
User Group: lefundes.gabriel@gmail.com
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPEW34M/45PQ3RS
8JMKD3MGPEW34M/4742MCS
Citing Item List: sid.inpe.br/sibgrapi/2021/11.12.11.46 5
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume

